Simulations of complementarity systems using time-stepping

One of the useful outcomes of the theory of complementarity systems is a new family of methods for the numerical simulation of discontinuous systems. Here we demonstrate the essence of this family by introducing the method of time-stepping, and we do so by means of an example.

Example 1 (Simulation using time-stepping) Consider the following discontinuous dynamical system in \mathbb R^2: \begin{aligned} \dot x_1 &= -\operatorname{sign} x_1 + 2 \operatorname{sign} x_2\\ \dot x_2 &= -2\operatorname{sign} x_1 -\operatorname{sign} x_2. \end{aligned}

The state portrait is in Fig. 1.

Show the code
f₁(x) = -sign(x[1]) + 2*sign(x[2])
f₂(x) = -2*sign(x[1]) - sign(x[2])
f(x) = [f₁(x), f₂(x)]

using CairoMakie
fig = Figure(size = (800, 800),fontsize=20)
ax = Axis(fig[1, 1], xlabel = "x₁", ylabel = "x₂")
streamplot!(ax,(x₁,x₂)->Point2f(f([x₁,x₂])), -2.0..2.0, -2.0..2.0, colormap = :magma)
fig
Figure 1: State portrait of the discontinuous system

One particular (vector-valued) state trajectory obtained by a default ODE solver is shown in Fig. 2.

Show the code
using DifferentialEquations

function f!(dx,x,p,t)
    dx[1] = -sign(x[1]) + 2*sign(x[2])
    dx[2] = -2*sign(x[1]) - sign(x[2])
end

x0 = [-1.0, 1.0]
tfinal = 2.0
tspan = (0.0,tfinal)
prob = ODEProblem(f!,x0,tspan)
sol = solve(prob)

using Plots
Plots.plot(sol,xlabel="t",ylabel="x",label=false,lw=3)
Figure 2: Trajectory of the discontinuous system

We can also plot the trajectory in the state space, as in Fig. 3.

Show the code
Plots.plot(sol[1,:],sol[2,:],xlabel="x₁",ylabel="x₂",label=false,aspect_ratio=:equal,lw=3,xlims=(-1.2,0.5))
Figure 3: Trajectory of the discontinuous system in the state space

Note that for the default setting of absolute and relative tolerances, the adaptive-step ODE solver actually needed a huge number of steps:

Show the code
length(sol.t)
288594

This is indeed quite a lot. Can we do better?

In this particular example, we have the benefit of being able to analyze the solution analytically. In particular, we can ask: how fast does the solution approach the origin? We turn this into a question of how fast the distance from the origin decreases. With some anticipation we use the 1-norm \|\bm x\|_1 = |x_1| + |x_2| as the distance of the state from the origin. We then ask: \frac{\mathrm d}{\mathrm dt}\|\bm x\|_1 = ?

We avoid the troubles with nonsmoothness of the absolute value by considering each quadrant separately. Let’s start in the first (upper right) quadrant, that is, x_1>0 and x_2>0, from which it follows that |x_1| = x_1, \;|x_2| = x_2, and therefore \frac{\mathrm d}{\mathrm dt}\|\bm x\|_1 = \dot x_1 + \dot x_2 = 1 - 3 = -2.

The situation is identical in the other quadrants. We do not worry that the norm is undefined on the axes, because the trajectory obviously just crosses them.

The conclusion is that the trajectory will hit the origin in finite time (!): with, say, x_1(0) = 1 and x_2(0) = 1, the trajectory hits the origin at t=(|x_1(0)|+|x_2(0)|)/2 = 1. Surprisingly (or not), this will happen after an infinite number of revolutions around the origin…
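This can be corroborated numerically: using \frac{\mathrm d}{\mathrm dt}\|\bm x\|_1 = \operatorname{sign}(x_1)\dot x_1 + \operatorname{sign}(x_2)\dot x_2, one representative point per open quadrant suffices. A quick sketch, independent of the code above:

```julia
# Rate of change of the 1-norm in each open quadrant:
# d/dt ||x||₁ = sign(x₁)·ẋ₁ + sign(x₂)·ẋ₂
f(x) = [-sign(x[1]) + 2*sign(x[2]), -2*sign(x[1]) - sign(x[2])]
rates = map(([1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0])) do x
    dx = f(x)
    sign(x[1])*dx[1] + sign(x[2])*dx[2]
end
# rates == (-2.0, -2.0, -2.0, -2.0)
```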

We have seen above in Fig. 2 that a default solver for ODE can handle the situation in a decent way. But we have also seen that this was at the cost of a huge number of steps. To get some insight, we implement our own solver.

Forward Euler with fixed step size

We start with the simplest of all methods, the forward Euler method with a fixed step length. The computation of the next state is done just by the assignment \begin{aligned} {\color{blue}x_{1,k+1}} &= x_{1,k} + h (-\operatorname{sign} x_{1,k} + 2 \operatorname{sign} x_{2,k})\\ {\color{blue}x_{2,k+1}} &= x_{2,k} + h (-2\operatorname{sign} x_{1,k} - \operatorname{sign} x_{2,k}). \end{aligned}

Blue color in our text is used to denote the unknown

We use the blue color here (and in the next few paragraphs) to highlight what is unknown.

Show the code
f(x) = [-sign(x[1]) + 2*sign(x[2]), -2*sign(x[1]) - sign(x[2])]

using LinearAlgebra
N = 1000
x₀ = [-1.0, 1.0]
x = [x₀]
tfinal = norm(x₀,1)/2   # time at which the trajectory should hit the origin
tfinal = 5.0            # ... but we simulate longer to observe the behavior at the origin
h = tfinal/N 
t = range(0.0, step=h, stop=tfinal)

for i=1:N
    xnext = x[i] + h*f(x[i]) 
    push!(x,xnext)
end

X = [getindex.(x, i) for i in 1:length(x[1])]

Plots.plot(t,X,lw=3,label=false,xlabel="t",ylabel="x")
Figure 4: Solution trajectory for a discontinuous system obtained by the forward Euler method

The number of steps is

Show the code
length(t)
1001

which is significantly less than before, but the solution is not perfect – notice the chattering. It could be reduced by using a smaller step size, but again, at the cost of a longer simulation time.

Backward Euler

As an alternative to the forward Euler method, we can use the backward Euler method. The computation of the next state is done by solving the nonlinear equations \begin{aligned} {\color{blue} x_{1,k+1}} &= x_{1,k} + h (-\operatorname{sign} {\color{blue}x_{1,k+1}} + 2 \operatorname{sign} {\color{blue}x_{2,k+1}})\\ {\color{blue} x_{2,k+1}} &= x_{2,k} + h (-2\operatorname{sign} {\color{blue}x_{1,k+1}} - \operatorname{sign} {\color{blue}x_{2,k+1}}). \end{aligned}

The discontinuities on the right-hand sides constitute a challenge for solvers of nonlinear equations – recall that Newton’s method requires the first derivatives (assembled into the Jacobian) or their approximations. Instead of this struggle, we can use the linear complementarity problem (LCP) formulation.

Formulation of the backward Euler using LCP

Instead of solving the above nonlinear equations with discontinuities, we introduce new variables u_1 and u_2 as the outputs of the \operatorname{sign} functions: \begin{aligned} {\color{blue} x_{1,k+1}} &= x_{1,k} + h (-{\color{blue}u_{1}} + 2 {\color{blue}u_{2}})\\ {\color{blue} x_{2,k+1}} &= x_{2,k} + h (-2{\color{blue}u_{1}} - {\color{blue}u_{2}}). \end{aligned}

But now we have to enforce the relationship between \bm u and \bm x_{k+1}. Recall the standard definition of the \operatorname{sign} function is \operatorname{sign}(x) = \begin{cases} 1 & x>0\\ 0 & x=0\\ -1 & x<0, \end{cases} but we change the definition to a set-valued function \begin{cases} \operatorname{sign}(x) = 1 & x>0\\ \operatorname{sign}(x) \in [-1,1] & x=0\\ \operatorname{sign}(x) = -1 & x<0. \end{cases}

Accordingly, we set the relationship between \bm u and \bm x to \begin{cases} u_1 = 1 & x_1>0\\ u_1 \in [-1,1] & x_1=0\\ u_1 = -1 & x_1<0, \end{cases} and \begin{cases} u_2 = 1 & x_2>0\\ u_2 \in [-1,1] & x_2=0\\ u_2 = -1 & x_2<0. \end{cases}
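The three cases for each u_i can be read as a membership test. A tiny sketch (the function name is our own, not from the text):

```julia
# Does u belong to the set-valued sign of x?
in_signset(u, x) = x > 0 ? u == 1 : (x < 0 ? u == -1 : -1 <= u <= 1)

in_signset(1, 2.5)     # true: for positive x, u must equal 1
in_signset(0.3, 0.0)   # true: at x = 0, any u in [-1, 1] is admissible
in_signset(1, -2.5)    # false: for negative x, u must equal -1
```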

But these are the mixed complementarity constraints we have defined previously! We can thus write a single step of the simulation algorithm as the MCP \boxed{ \begin{aligned} \begin{bmatrix} {\color{blue} x_{1,k+1}}\\ {\color{blue} x_{2,k+1}} \end{bmatrix} &= \begin{bmatrix} x_{1,k}\\ x_{2,k} \end{bmatrix} + h \begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix} \begin{bmatrix} {\color{blue}u_{1}}\\ {\color{blue}u_{2}} \end{bmatrix}\\ -1 &\leq {\color{blue} u_1} \leq 1 \quad \bot \quad -{\color{blue}x_{1,k+1}}\\ -1 &\leq {\color{blue} u_2} \leq 1 \quad \bot \quad -{\color{blue}x_{2,k+1}}. \end{aligned} } \tag{1}

We now have enough to start implementing a solver for this system, provided we have access to a solver for the MCP (see the page dedicated to software for complementarity problems). But before we do that, we analyze the problem a bit more deeply. In this particular small and simple case, it is still doable with just pencil and paper.

9 possible combinations

There are now 9 possible combinations of the values of u_1 and u_2, each of them either strictly inside its interval or at one of its two ends. Let’s explore some combinations. We start with x_{1,k+1} = x_{2,k+1} = 0, while u_1 \in [-1,1] and u_2 \in [-1,1]:

\begin{aligned} \begin{bmatrix} 0\\ 0 \end{bmatrix} &= \begin{bmatrix} x_{1,k}\\ x_{2,k} \end{bmatrix} + h \begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix} \begin{bmatrix} {\color{blue}u_{1}}\\ {\color{blue}u_{2}} \end{bmatrix}\\ & -1 \leq {\color{blue} u_1} \leq 1, \quad -1 \leq {\color{blue} u_2} \leq 1 \end{aligned}

What does the set of states from which the next state is zero look like? We isolate the vector \bm u from the equation and impose the constraints on it: \begin{aligned} -\begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix}^{-1} \begin{bmatrix} x_{1,k}\\ x_{2,k} \end{bmatrix} &= h \begin{bmatrix} {\color{blue}u_{1}}\\ {\color{blue}u_{2}} \end{bmatrix}\\ -1 \leq {\color{blue} u_1} \leq 1, \quad -1 &\leq {\color{blue} u_2} \leq 1 \end{aligned}

We then get \begin{bmatrix} -h\\-h \end{bmatrix} \leq \begin{bmatrix} 0.2 & 0.4 \\ -0.4 & 0.2 \end{bmatrix} \begin{bmatrix} x_{1,k}\\ x_{2,k} \end{bmatrix} \leq \begin{bmatrix} h\\ h \end{bmatrix}
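The numerical values of the matrix in this inequality can be double-checked directly (a quick sketch):

```julia
using LinearAlgebra

A = [-1.0 2.0; -2.0 -1.0]
-inv(A)   # ≈ [0.2 0.4; -0.4 0.2], the matrix appearing in the inequality above
```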

For h=0.2, the resulting set is a rotated square, as shown in Fig. 5.

Show the code
using LazySets
h = 0.2
H1u = HalfSpace([0.2, 0.4], h)
H2u = HalfSpace([-0.4, 0.2], h)
H1l = HalfSpace(-[0.2, 0.4], h)
H2l = HalfSpace(-[-0.4, 0.2], h)

Ha = H1u ∩ H2u ∩ H1l ∩ H2l

using Plots
Plots.plot(Ha, aspect_ratio=:equal,xlabel="x₁",ylabel="x₂",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))
Figure 5: The set of states from which the next state is zero (the origin)

Indeed, if the current state is in this rotated square, then the next state will be zero. No more infinite spiralling around the origin! No more chattering! Perfect zero.
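This claim is easy to verify for a single point: pick any state inside the square, compute the u that maps it to the origin, and confirm that this u is admissible. A sketch, with an arbitrarily chosen state:

```julia
using LinearAlgebra

A = [-1.0 2.0; -2.0 -1.0]
h = 0.2
x = [0.1, 0.1]              # a state inside the rotated square for h = 0.2
u = -inv(A) * x / h         # the u that maps x to the origin in one step
@assert all(abs.(u) .<= 1)  # admissible: u stays within the sign bounds
xnext = x + h * A * u       # ≈ [0.0, 0.0]
```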

Another combination

We now consider u_1 = 1, u_2 = 1, which we substitute into Eq. 1: \begin{aligned} \begin{bmatrix} {\color{blue} x_{1,k+1}}\\ {\color{blue} x_{2,k+1}} \end{bmatrix} &= \begin{bmatrix} x_{1,k}\\ x_{2,k} \end{bmatrix} + h \begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix} \begin{bmatrix} {1}\\ {1} \end{bmatrix}\\ \color{blue}x_{1,k+1} &\geq 0\\ \color{blue}x_{2,k+1} &\geq 0, \end{aligned} which can be reformatted to \begin{bmatrix} x_{1,k}\\ x_{2,k} \end{bmatrix} + h \begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix} \begin{bmatrix} 1\\ 1 \end{bmatrix}\geq \bm 0, and further to \begin{bmatrix} x_{1,k}\\ x_{2,k} \end{bmatrix} \geq h \begin{bmatrix} -1\\ 3 \end{bmatrix}.
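A one-point sanity check of this region (state chosen arbitrarily so that it satisfies the derived inequality):

```julia
A = [-1.0 2.0; -2.0 -1.0]
h = 0.2
x = [0.0, 0.7]                  # satisfies x ≥ h·[-1, 3] = [-0.2, 0.6]
xnext = x + h * A * [1.0, 1.0]  # one step with u₁ = u₂ = 1
@assert all(xnext .>= 0)        # both components of the next state are nonnegative
```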

The set of states for which the next state is such that both its state variables are positive (hence u_1 = 1, u_2 = 1) is shown in Fig. 6.

Show the code
using LazySets
h = 0.2
A = [-1.0 2.0; -2.0 -1.0]
u = [1.0, 1.0]
b = h*A*u

H1 = HalfSpace([-1.0, 0.0], b[1])
H2 = HalfSpace([0.0, -1.0], b[2])
Hb = H1 ∩ H2

using Plots
Plots.plot(Ha, aspect_ratio=:equal,xlabel="x₁",ylabel="x₂",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))
Plots.plot!(Hb)
Figure 6: The set of states from which the next state has both state variables positive

All nine regions

We plot all nine regions in Fig. 7. Note that we do not color them all – the white regions are to be counted as well.

Show the code
using LazySets
h = 0.2
A = [-1.0 2.0; -2.0 -1.0]

u = [1, -1]
b = h*A*u

H1 = HalfSpace(-[1.0, 0.0], b[1])
H2 = HalfSpace([0.0, 1.0], -b[2])
Hc = H1 ∩ H2

u = [-1, 1]
b = h*A*u

H1 = HalfSpace([1.0, 0.0], -b[1])
H2 = HalfSpace(-[0.0, 1.0], b[2])
Hd = H1 ∩ H2

u = [-1, -1]
b = h*A*u

H1 = HalfSpace([1.0, 0.0], -b[1])
H2 = HalfSpace([0.0, 1.0], -b[2])
He = H1 ∩ H2

using Plots
Plots.plot(Ha, aspect_ratio=:equal,xlabel="x₁",ylabel="x₂",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))
Plots.plot!(Hb)
Plots.plot!(Hc)
Plots.plot!(Hd)
Plots.plot!(He)
Figure 7: The nine regions of the state space for the spiralling system

Solutions using an MCP solver

Now that we have gained some insight into the algorithm, we can implement it by wrapping a for-loop around an MCP solver.

Show the code
M = [-1 2; -2 -1]
h = 2e-1
tfinal = 2.0
N = round(Int, tfinal/h)

x0 = [-1.0, 1.0]
x = [x0]

using JuMP
using PATHSolver

for i = 1:N
    model = Model(PATHSolver.Optimizer)
    set_optimizer_attribute(model, "output", "no")
    set_silent(model)
    @variable(model, -1 <= u[1:2] <= 1)
    @constraint(model, -h*M * u - x[end] ⟂ u)
    optimize!(model)
    push!(x, x[end]+h*M*value.(u))
end

The code is admittedly unoptimized (for example, it is not wise to rebuild the optimization model from scratch in every iteration), but our intention was to keep it simple and easy to read.

The outcome of the code can be plotted as in Fig. 8. A solution obtained by a classical ODE solver with default settings is plotted for comparison.

Show the code
t = range(0.0, step=h, stop=tfinal)
X = [getindex.(x, i) for i in 1:length(x[1])]

using Plots
Plots.plot(Ha, aspect_ratio=:equal,xlabel="x₁",ylabel="x₂",label=false,xlims=(-1.5,1.5),ylims=(-1.5,1.5))
Plots.plot!(Hb)
Plots.plot!(Hc)
Plots.plot!(Hd)
Plots.plot!(He)
Plots.plot!(X[1],X[2],xlabel="x₁",ylabel="x₂",label="Time-stepping",aspect_ratio=:equal,lw=3,markershape=:circle)
Plots.plot!(sol[1,:],sol[2,:],label=false,lw=3)
Figure 8: Solution trajectory for a discontinuous system obtained by the MCP solver

It is worth emphasizing that once the solution trajectory hits the blue square, it is just one step from the origin. And once the solution trajectory reaches the origin, it stays there forever, no more spiralling, no more chattering.

Apparently, the chosen step length was too large. If we reduce it, the resulting trajectory resembles the one obtained by a default ODE solver, as shown in Fig. 9.

Show the code
M = [-1 2; -2 -1]
h = 1e-2
tfinal = 2.0
N = round(Int, tfinal/h)

x0 = [-1.0, 1.0]
x = [x0]

using JuMP
using PATHSolver

for i = 1:N
    model = Model(PATHSolver.Optimizer)
    set_optimizer_attribute(model, "output", "no")
    set_silent(model)
    @variable(model, -1 <= u[1:2] <= 1)
    @constraint(model, -h*M * u - x[end] ⟂ u)
    optimize!(model)
    push!(x, x[end]+h*M*value.(u))
end

t = range(0.0, step=h, stop=tfinal)
X = [getindex.(x, i) for i in 1:length(x[1])]

Plots.plot(X[1],X[2],xlabel="x₁",ylabel="x₂",label="Time-stepping",aspect_ratio=:equal,lw=3,markershape=:circle)
Plots.plot!(sol[1,:],sol[2,:],lw=3,label="Default ODE solver with adaptive step")
Figure 9: Solution trajectory for a discontinuous system obtained by the MCP solver with a smaller step size

And yet the total number of steps is still much smaller:

Show the code
length(t)
201

Example 2 (Time-stepping for a mechanical system with a hard stop) Let’s now apply the time-stepping method to a mechanical system with a hard stop, as described in Example 2 in the section on complementarity systems. We start by formulating the semi-explicit Euler scheme

\begin{aligned} \bm x_{k+1} &= \bm x_k + h \bm v_{k+1}\\ \bm v_{k+1} &= \bm v_k + h \mathbf A \bm x_{k} + h \mathbf B \bm u_k, \end{aligned} where \mathbf A = \begin{bmatrix} -\frac{k_1+k_2}{m_1} & \frac{k_2}{m_1}\\ \frac{k_2}{m_2} & -\frac{k_2}{m_2} \end{bmatrix}, \quad \mathbf B = \begin{bmatrix} \frac{1}{m_1} & -\frac{1}{m_1}\\ 0 & \frac{1}{m_2} \end{bmatrix}.

Substituting from the second to the first equation, we get {\color{blue}\bm x_{k+1}} = \bm x_k + h \bm v_k + h^2 \mathbf A \bm x_k + h^2 \mathbf B {\color{blue}\bm u_k}, in which we again highlight the unknown terms by blue color for convenience. These two vector unknowns must satisfy the following complementarity condition 0 \leq x_{1,k+1} \perp u_{1,k} \geq 0, and 0 \leq (x_{2,k+1} - x_{1,k+1}) \perp u_{2,k} \geq 0.

This can be written compactly as \bm 0 \leq \mathbf C \bm x_{k+1} \perp \bm u_{k} \geq 0, where \mathbf C is known from the previous section as \mathbf C = \begin{bmatrix}1 & 0\\ -1 & 1\end{bmatrix}.

The constraint can be expanded into \bm 0 \leq \mathbf C\left(\bm x_k + h \bm v_k + h^2 \mathbf A \bm x_k + h^2 \mathbf B {\color{blue}\bm u_k}\right) \perp \bm u_{k} \geq 0, which is an LCP problem with \bm q = \mathbf C\left(\bm x_k + h \bm v_k + h^2 \mathbf A \bm x_k\right) and \mathbf M = h^2 \mathbf C \mathbf B.
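Before coding the loop, it may be worth checking that this LCP is well posed. With the parameter values used in the code below (m_1 = m_2 = 1), the matrix \mathbf M = h^2 \mathbf C \mathbf B turns out to be symmetric positive definite, hence a P-matrix, which by a standard LCP result guarantees a unique solution for every \bm q. A sketch:

```julia
using LinearAlgebra

h = 1e-1
C = [1.0 0.0; -1.0 1.0]   # the matrix C from the text
B = [1.0 -1.0; 0.0 1.0]   # B evaluated at m₁ = m₂ = 1
M = h^2 * C * B           # = 0.01·[1 -1; -1 2]
@assert issymmetric(M)
@assert all(eigvals(Symmetric(M)) .> 0)   # positive definite ⇒ unique LCP solution
```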

An implementation of the time-stepping for this system is shown below.

Show the code
m1 = 1.0
m2 = 1.0
k1 = 1.0
k2 = 1.0

A = [-(k1+k2)/m1 k2/m1; k2/m2 -k2/m2]
B = [1/m1 -1/m1; 0 1/m2]
h = 1e-1

x10 = 1.0
x20 = 3.5
v10 = 0.0
v20 = 0.0

x0 = [x10, x20] 
v0 = [v10, v20]

x = [x0]
v = [v0]

tfinal = 10.0
N = round(Int, tfinal/h)

for i = 1:N
    model = Model(PATHSolver.Optimizer)
    set_optimizer_attribute(model, "output", "no")
    set_silent(model)
    @variable(model, u[1:2] >= 0)
    @constraint(model, [1 0; -1 1]*(h^2*B*u + x[end] + h*v[end] + h^2*A*x[end]) ⟂ u)
    optimize!(model)
    xnext = h^2*B*value.(u) + x[end] + h*v[end] + h^2*A*x[end]
    push!(v, v[end] + h*A*x[end] + h*B*value.(u))   # uses xₖ, as in the scheme above
    push!(x, xnext)
end

t = range(0.0, step=h, stop=tfinal)
X = [getindex.(x, i) for i in 1:length(x[1])]

Plots.plot(t,X[1],label="x₁",lw=3,xlabel="t",ylabel="Positions",markershape=:circle)
Plots.plot!(t,X[2],label="x₂",lw=3,markershape=:circle)

Note, once again, the remarkably small number of steps:

Show the code
length(t)
101

while the shape of the trajectory is already quite accurate (you can try reducing the step length yourself), and chattering is absent from the solution.

Example 3 (Time stepping for a mechanical system with Coulomb friction modelled as complementarity constraints) We consider an object of mass m moving on a horizontal surface. The object is subject to an applied force F_\mathrm{a} and a friction force F_\mathrm{f}, both oriented in the positive direction of position and velocity, as in Fig. 10.

Figure 10: Coulomb friction to be modelled using complementarity constraints

The friction force is modelled as a Coulomb friction model F_\mathrm{f}(t) = -m g \mu\, \mathrm{sgn}(v(t)), where \mu is a coefficient that relates the frictional force to the normal force mg.

The motion equations are \begin{aligned} \dot x(t) &= v(t)\\ \dot v(t) &= \frac{1}{m}(F_\mathrm{a}(t) + F_\mathrm{f}(t)). \end{aligned}

We now express the friction force as a difference of two nonnegative variables F_\mathrm{f}(t) = F^+_\mathrm{f}(t) - F^-_\mathrm{f}(t),\quad F^+_\mathrm{f}(t), \,F^-_\mathrm{f}(t) \geq 0.

And we also introduce another auxiliary variable \nu(t) that is just the absolute value of the velocity \nu(t) = |v(t)|.

With the three new variables we formulate the Coulomb friction model as \begin{aligned} 0 \leq v + \nu &\perp F^+_\mathrm{f} \geq 0,\\ 0 \leq -v + \nu &\perp F^-_\mathrm{f} \geq 0, \\ 0 \leq \mu m g - F^+_\mathrm{f} - F^-_\mathrm{f} &\perp \nu \geq 0. \end{aligned}

If F^-_\mathrm{f} > 0, that is, if the friction force is negative, the second complementarity constraint implies that the velocity equals its absolute value; in other words, it is nonnegative, and the object moves to the right (or stays still). If the velocity is positive, the first constraint implies that F^+_\mathrm{f} = 0, and the third constraint then implies that F^-_\mathrm{f} = mg\mu. Feel free to explore the other cases.
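The sliding case v > 0 can also be checked mechanically: with \nu = |v|, F^+_\mathrm{f} = 0, and F^-_\mathrm{f} = \mu m g, all three pairs are complementary. A sketch with made-up parameter values:

```julia
μ, m, g = 0.3, 1.0, 9.81   # illustrative values only, not from the text
v = 2.0                    # the object slides in the positive direction
ν = abs(v)
F⁺, F⁻ = 0.0, μ * m * g    # friction fully saturated in the negative direction
pairs = [(v + ν, F⁺), (-v + ν, F⁻), (μ*m*g - F⁺ - F⁻, ν)]
ok = all(a >= 0 && b >= 0 && a * b == 0 for (a, b) in pairs)   # true
```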

We now proceed to implement the time-stepping method for this system. Once again, we use the semi-explicit Euler scheme \begin{aligned} x_{k+1} &= x_k + h v_{k+1},\\ v_{k+1} &= v_k + \frac{h}{m} (F^+_{\mathrm{f},k} - F^-_{\mathrm{f},k}), \end{aligned} where the velocity is subject to the complementarity constraints \begin{aligned} 0 \leq v_{k+1} + \nu_{k+1} &\perp F^+_{\mathrm{f},k} \geq 0,\\ 0 \leq -v_{k+1} + \nu_{k+1} &\perp F^-_{\mathrm{f},k} \geq 0, \\ 0 \leq \mu m g - F^+_{\mathrm{f},k} - F^-_{\mathrm{f},k} &\perp \nu_{k+1} \geq 0, \end{aligned} into which we substitute for v_{k+1} \boxed{ \begin{aligned} 0 \leq v_k + \frac{h}{m} ({\color{blue}F^+_{\mathrm{f},k}} - {\color{blue}F^-_{\mathrm{f},k}}) + {\color{blue}\nu_{k+1}}\, &\perp\, {\color{blue}F^+_{\mathrm{f},k}} \geq 0,\\ 0 \leq -\left(v_k + \frac{h}{m} ({\color{blue}F^+_{\mathrm{f},k}} - {\color{blue}F^-_{\mathrm{f},k}})\right) + {\color{blue}\nu_{k+1}}\, &\perp\, {\color{blue}F^-_{\mathrm{f},k}} \geq 0, \\ 0 \leq \mu m g - {\color{blue}F^+_{\mathrm{f},k}} - {\color{blue}F^-_{\mathrm{f},k}} \,&\perp\, {\color{blue}\nu_{k+1}} \geq 0, \end{aligned} } in which we highlighted the unknowns by blue color for convenience.
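For this scalar system, and assuming no applied force, the boxed MCP can even be solved in closed form by case analysis: the object sticks whenever friction can absorb the whole momentum within one step (|m v_k / h| \leq \mu m g), and slides with saturated friction otherwise. A sketch (the helper function is ours, not from the text):

```julia
# One time step of the velocity, by case analysis of the boxed MCP (Fₐ = 0):
function friction_step(v, h, m, μ, g)
    if abs(v) <= h * μ * g
        return 0.0                      # sticking: friction can hold the object
    else
        return v - sign(v) * h * μ * g  # sliding: friction saturates at μmg
    end
end

friction_step(100.0, 0.2, 100.0, 10.0, 9.81)   # slides: ≈ 80.38
friction_step(5.0, 0.2, 100.0, 10.0, 9.81)     # sticks: 0.0
```

Note that the mass cancels: the friction acceleration is bounded by μ·g regardless of m.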

Show the code
using JuMP
using PATHSolver
using Plots

m = 100.0
g = 9.81
μ = 10.0

h = 2e-1

x0 = 0.0
v0 = 100.0

x = [x0]
v = [v0]

tfinal = 5.0
N = round(Int, tfinal/h)

for i = 1:N
    model = Model(PATHSolver.Optimizer)
    set_optimizer_attribute(model, "output", "no")
    set_silent(model)
    @variable(model, vabs >= 0)
    @variable(model, F⁺ >= 0)
    @variable(model, F⁻ >= 0)
    @constraint(model, (v[end] + h/m*(F⁺-F⁻) + vabs) ⟂ F⁺)
    @constraint(model, (-(v[end] + h/m*(F⁺-F⁻)) + vabs) ⟂ F⁻)
    @constraint(model, (μ*m*g - (F⁺+F⁻)) ⟂ vabs)
    optimize!(model)
    push!(v, v[end] + h/m*(value(F⁺)-value(F⁻)))
    push!(x, x[end] + h*v[end])   # v[end] already equals v_{k+1} after the preceding push!
end

t = range(0.0, step=h, stop=tfinal)
Plots.plot(t,v,label="v",lw=3,markershape=:circle)
Plots.plot!(t,x,label="x",lw=3,markershape=:circle,xlabel="t")

Once again, we can see that the method correctly simulates the system coming to a complete stop due to friction.

Important

While concluding this section on time-stepping methods, we must emphasize that this was really just an introduction. But the essence of using complementarity concepts in numerical simulation is perhaps a bit clearer now.
